Modelling Customer Churn using LightGBM

Load the libraries

Colab

Useful Scripts

Load the Data

Data Processing

Data Types

Train and Test Data

Numerical and Categorical Features

Custom Features

Train Validation Split
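
A straightforward way to carve out the validation set is scikit-learn's `train_test_split`; the arrays below are random placeholders standing in for the churn features and labels. Stratifying on the label keeps the churn ratio the same in both splits:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))     # placeholder feature matrix
y = rng.integers(0, 2, size=1000)  # placeholder churn labels

# Stratify on y so both splits preserve the churn ratio.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```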

lgb.Dataset

lgb.Dataset(
    data,
    label               = None,
    reference           = None,
    weight              = None,
    group               = None,
    init_score          = None,
    silent              = False,
    feature_name        = 'auto',
    categorical_feature = 'auto',
    params              = None,
    free_raw_data       = True,
)

Modelling

Go to Top

lgb.LGBMClassifier(
    boosting_type     = 'gbdt',
    num_leaves        = 31,
    max_depth         = -1,
    learning_rate     = 0.1,
    n_estimators      = 100,
    subsample_for_bin = 200000,
    objective         = None,
    class_weight      = None,
    min_split_gain    = 0.0,
    min_child_weight  = 0.001,
    min_child_samples = 20,
    subsample         = 1.0,
    subsample_freq    = 0,
    colsample_bytree  = 1.0,
    reg_alpha         = 0.0,
    reg_lambda        = 0.0,
    random_state      = None,
    n_jobs            = -1,
    silent            = True,
    importance_type   = 'split',
    **kwargs,
)

--------------------------- model.fit
model.fit(
    X,
    y,
    sample_weight         = None,
    init_score            = None,
    eval_set              = None,
    eval_names            = None,
    eval_sample_weight    = None,
    eval_class_weight     = None,
    eval_init_score       = None,
    eval_metric           = None,
    early_stopping_rounds = None,
    verbose               = True,
    feature_name          = 'auto',
    categorical_feature   = 'auto',
    callbacks             = None
)

LightGBM using booster method

LightGBM CV method

--------------------------- lgb.cv
lgb.cv(
    params,
    train_set,
    num_boost_round       = 100,
    folds                 = None,
    nfold                 = 5,
    stratified            = True,
    shuffle               = True,
    metrics               = None,
    fobj                  = None,
    feval                 = None,
    init_model            = None,
    feature_name          = 'auto',
    categorical_feature   = 'auto',
    early_stopping_rounds = None,
    fpreproc              = None,
    verbose_eval          = None,
    show_stdv             = True,
    seed                  = 0,
    callbacks             = None,
    eval_train_metric     = False,
)

LightGBM HPO Using Hyperopt

fmin(
    fn,
    space,
    algo,
    max_evals             = 9223372036854775807,
    timeout               = None,
    loss_threshold        = None,
    trials                = None,
    rstate                = None,
    allow_trials_fmin     = True,
    pass_expr_memo_ctrl   = None,
    catch_eval_exceptions = False,
    verbose               = True,
    return_argmin         = True,
    points_to_evaluate    = None,
    max_queue_len         = 1,
    show_progressbar      = True,
)

LightGBM HPO using hyperopt (sklearn method)

Model Evaluation

Model Evaluation using SHAP

Time Taken
